Planning under uncertainty is a significant challenge in robotics, as robots often rely on incomplete geometric information due to sensing limitations. For instance, a robot might have only a partial view of an object, leading to collisions with unseen parts. To address this, the ELPIS lab conducts research on effective motion planning under uncertainty.
Model-based robotic planning techniques, such as inverse kinematics and motion planning, can endow robots with the ability to perform complex manipulation tasks such as grasping, object manipulation, and precise placement. However, these methods often assume perfect world knowledge and rely on approximate world models. For example, tasks that involve dynamics, such as pushing or pouring, are difficult to address with model-based techniques because accurate characterizations of the object dynamics are hard to obtain. Additionally, uncertainty in perception prevents these methods from maintaining an accurate estimate of the world state. In this work, we propose using a model-based motion planner to build an ensemble of plans under different environment hypotheses. We then train a meta-policy to decide online which plan to track based on the current history of observations. By leveraging history, this policy can switch between ensemble plans to circumvent getting “stuck” and complete the task. We tested our method on a 7-DOF Franka Emika robot pushing a cabinet door in simulation, and we demonstrate that a successful meta-policy can be trained to push the door in settings with high environment uncertainty while requiring little data.
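To make the plan-selection idea concrete, the sketch below shows one minimal way a history-conditioned meta-policy could choose which plan from an ensemble to track at each step. This is an illustrative assumption, not the paper's implementation: the names `MetaPolicy`, `select_plan`, and `rollout`, the linear scoring over a fixed history window, and the dummy environment are all hypothetical placeholders.

```python
# Hedged sketch: a meta-policy that picks which ensemble plan to track,
# conditioned on a short window of recent observations.
import numpy as np

class MetaPolicy:
    """Tiny linear policy over a fixed-length observation history (illustrative)."""
    def __init__(self, obs_dim, history_len, num_plans, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(num_plans, obs_dim * history_len))

    def select_plan(self, obs_history):
        # Flatten the history window and score each candidate plan.
        x = np.concatenate(obs_history)
        return int(np.argmax(self.W @ x))

def rollout(env_step, plan_ensemble, policy, obs_dim, history_len, horizon):
    """Track one ensemble plan per step, switching whenever the policy prefers another."""
    obs_history = [np.zeros(obs_dim)] * history_len
    obs = np.zeros(obs_dim)
    for t in range(horizon):
        obs_history = obs_history[1:] + [obs]
        k = policy.select_plan(obs_history)                      # plan chosen from the ensemble
        action = plan_ensemble[k][min(t, len(plan_ensemble[k]) - 1)]
        obs, done = env_step(action)                             # execute one step of plan k
        if done:
            break
    return obs

# Toy usage with a dummy environment and three candidate plans.
if __name__ == "__main__":
    obs_dim, history_len, horizon = 4, 5, 20
    plans = [np.zeros((horizon, 2)), np.ones((horizon, 2)), -np.ones((horizon, 2))]
    policy = MetaPolicy(obs_dim, history_len, num_plans=len(plans))
    dummy_step = lambda action: (np.random.default_rng(1).normal(size=obs_dim), False)
    rollout(dummy_step, plans, policy, obs_dim, history_len, horizon)
```

In this toy form the policy weights are random; in practice the weights (or a richer, e.g. recurrent, model) would be trained so that the observation history signals when the currently tracked plan is failing and a switch is warranted.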
Motion planning is a core problem in robotics, with a range of existing methods aimed at addressing its diverse set of challenges. However, most existing methods rely on complete knowledge of the robot's environment, an assumption that seldom holds true due to inherent limitations of robot perception. To enable tractable motion planning for high-DOF robots under partial observability, we introduce BLIND, an algorithm that leverages human guidance. BLIND utilizes inverse reinforcement learning to derive motion-level guidance from human critiques. The algorithm overcomes the computational challenge of reward learning for high-DOF robots by projecting the robot's continuous configuration space onto a motion-planner-guided discrete task model. The learned reward is in turn used as guidance to generate robot motion with a novel motion planner. We demonstrate BLIND on the Fetch robot and perform two simulation experiments with partial observability. Our experiments show that, despite the challenges of partial observability and high dimensionality, BLIND is capable of generating safe robot motion and outperforms baselines on metrics of teaching efficiency, success rate, and path quality.
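The sketch below illustrates the general pattern of learning a reward over a small discrete task model from human critiques and then planning against it. It is not BLIND's actual implementation: the logistic-regression-style reward fit, the binary approve/disapprove critique format, and the Dijkstra search over an adjacency map are simplifying assumptions made for illustration.

```python
# Hedged sketch: learn a per-state reward on a discrete task model from
# binary human critiques, then plan using the learned reward as a cost.
import heapq
import numpy as np

def learn_reward(features, critiques, lr=0.5, epochs=200):
    """features: (num_states, d) array; critiques: list of (state_id, label),
    where label 1 means the human approved motion through that state."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        for s, y in critiques:
            p = 1.0 / (1.0 + np.exp(-features[s] @ w))
            w += lr * (y - p) * features[s]   # gradient ascent on the critique log-likelihood
    return features @ w                        # estimated reward for every discrete state

def plan_on_task_model(adjacency, reward, start, goal):
    """Dijkstra over the discrete task model, using shifted negative reward as cost."""
    cost = np.max(reward) - reward
    frontier, visited = [(0.0, start, [start])], set()
    while frontier:
        c, s, path = heapq.heappop(frontier)
        if s == goal:
            return path
        if s in visited:
            continue
        visited.add(s)
        for nxt in adjacency[s]:
            if nxt not in visited:
                heapq.heappush(frontier, (c + cost[nxt], nxt, path + [nxt]))
    return None

# Toy usage: 4 discrete states, the human disapproves of state 1.
features = np.eye(4)
critiques = [(0, 1), (1, 0), (2, 1), (3, 1)]
reward = learn_reward(features, critiques)
path = plan_on_task_model({0: [1, 2], 1: [3], 2: [3], 3: []}, reward, start=0, goal=3)
```

Working in a small discrete task model, rather than the robot's full configuration space, is what keeps this kind of reward learning tractable for high-DOF robots.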
Recently, there has been a wealth of development in motion planning for robotic manipulation: new motion planners are continuously proposed, each with its own unique set of strengths and weaknesses. However, evaluating these new planners is challenging, and researchers often create their own ad hoc problems for benchmarking, which is time-consuming, prone to bias, and does not directly compare against other state-of-the-art planners. We present MotionBenchMaker, an open-source tool to generate benchmarking datasets for realistic robot manipulation problems. MotionBenchMaker is designed to be an extensible, easy-to-use tool that allows users both to generate datasets and to benchmark them by comparing motion planning algorithms. Empirically, we show the benefit of using MotionBenchMaker to procedurally generate datasets, which aids the fair evaluation of planners. We also present a suite of over 40 prefabricated datasets, with 5 commonly used robots in 8 environments, to serve as common ground for future motion planning research.
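The snippet below only illustrates the underlying idea of procedural dataset generation, i.e., sampling many perturbed variants of a nominal scene to obtain a benchmark of planning problems. It is not MotionBenchMaker's API or file format; the scene dictionary, the `sample_problem` and `generate_dataset` helpers, and the perturbation ranges are all hypothetical.

```python
# Hedged sketch of procedural benchmark generation: perturb a nominal scene
# to produce many distinct but related planning problems.
import copy
import random

def sample_problem(nominal_scene, rng):
    """Create one benchmark problem by jittering object poses in a nominal scene."""
    scene = copy.deepcopy(nominal_scene)
    for obj in scene["objects"]:
        obj["xyz"] = [p + rng.uniform(-0.05, 0.05) for p in obj["xyz"]]   # +/- 5 cm position noise
        obj["yaw"] = obj["yaw"] + rng.uniform(-0.3, 0.3)                  # +/- ~17 deg orientation noise
    return {"scene": scene, "start": nominal_scene["start"], "goal": nominal_scene["goal"]}

def generate_dataset(nominal_scene, num_problems, seed=0):
    rng = random.Random(seed)
    return [sample_problem(nominal_scene, rng) for _ in range(num_problems)]

# Toy usage: ten variations of a single tabletop grasping scene.
nominal = {
    "objects": [{"name": "mug", "xyz": [0.5, 0.0, 0.8], "yaw": 0.0}],
    "start": "home",
    "goal": "grasp_mug",
}
dataset = generate_dataset(nominal, num_problems=10)
```

Generating problems from a shared nominal scene with a fixed random seed is what makes such benchmarks reproducible and lets different planners be compared on exactly the same set of problems.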
Motion planning is a core problem in many applications, spanning from robotic manipulation to autonomous driving. Given its importance, several families of methods have been proposed to address it. However, most existing solutions require complete knowledge of the robot's environment, an assumption that might not hold in many real-world applications due to occlusions and the inherent limitations of robot sensors. Indeed, relatively little emphasis has been placed on developing safe motion planning algorithms that work in partially unknown environments. In this work, we investigate how a human who can observe the robot's workspace can enable motion planning for a robot with incomplete knowledge of that workspace. We propose a framework that combines machine learning and motion planning to address the challenges of planning motions for high-dimensional robots that learn from human interaction. Our preliminary results indicate that the proposed framework can successfully guide a robot in a partially unknown environment, quickly discovering feasible paths.